Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks
Authors
Abstract
Biological neurons communicate with a sparing exchange of pulses – spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using so few spikes to communicate. Building on recent insights in neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on adaptive spiking neurons. These spiking neurons efficiently encode information in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while homeostatically optimizing their firing rate. In the proposed paradigm of spiking neuron computation, neural adaptation is tightly coupled to synaptic plasticity, to ensure that downstream neurons can correctly decode upstream spiking neurons. We show that this type of network is inherently able to carry out asynchronous and event-driven neural computation, while performing identically to corresponding artificial neural networks (ANNs). In particular, we show that these adaptive spiking neurons can serve as drop-in replacements for ReLU neurons in standard feedforward ANNs. We demonstrate that this can also be successfully applied to a ReLU-based deep convolutional neural network for classifying the MNIST dataset. The ASNN thus outperforms current Spiking Neural Network (SNN) implementations, while responding up to an order of magnitude faster and using an order of magnitude fewer spikes. Additionally, in a streaming setting where frames are continuously classified, we show that the ASNN requires substantially fewer network updates than the corresponding ANN.
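The adaptive sigma-delta coding described above can be illustrated with a minimal sketch. Note that the function name, the time constant, and the specific threshold-update rule below are illustrative assumptions for a generic adaptive-threshold sigma-delta encoder, not the paper's exact neuron model:

```python
import math

def adaptive_sigma_delta_encode(signal, theta0=1.0, tau=20.0):
    """Sketch of adaptive sigma-delta spike coding.

    A spike is emitted whenever the residual between the input and a
    leaky running reconstruction exceeds half the current threshold.
    Each spike raises the threshold (neural adaptation), which then
    decays back toward its resting value theta0 -- so sustained input
    produces progressively sparser spiking, a simple form of the
    homeostatic firing-rate control described in the abstract.
    """
    decay = math.exp(-1.0 / tau)
    theta, recon, spikes = theta0, 0.0, []
    for t, s in enumerate(signal):
        recon *= decay                              # reconstruction leaks
        theta = theta0 + (theta - theta0) * decay   # threshold decays to rest
        if s - recon > 0.5 * theta:                 # residual exceeds threshold
            spikes.append(t)
            recon += theta                          # spike contributes theta to decoding
            theta += theta                          # multiplicative adaptation
    return spikes
```

For a constant input, such an encoder spikes densely at onset and then only occasionally as the reconstruction settles and the threshold adapts, using far fewer spikes than time steps.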
Similar references
Efficient Computation in Adaptive Artificial Spiking Neural Networks
Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using...
Gradient Descent for Spiking Neural Networks
Much of studies on neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of efficient supervised learning algorithm for spiking networks. Here,...
Improving the Izhikevich Model Based on Rat Basolateral Amygdala and Hippocampus Neurons, and Recognizing Their Possible Firing Patterns
Introduction: Identifying the potential firing patterns following different brain regions under normal and abnormal conditions increases our understanding of events at the level of neural interactions in the brain. Furthermore, it is important to be capable of modeling the potential neural activities to build precise artificial neural networks. The Izhikevich model is one of the simplest biolog...
Long short-term memory and Learning-to-learn in networks of spiking neurons
Networks of spiking neurons (SNNs) are frequently studied as models for networks of neurons in the brain, but also as paradigm for novel energy efficient computing hardware. In principle they are especially suitable for computations in the temporal domain, such as speech processing, because their computations are carried out via events in time and space. But so far they have been lacking the ca...
MASPINN: Novel Concepts for a Neuro-Accelerator for Spiking Neural Networks
We present the basic architecture of a Memory Optimized Accelerator for Spiking Neural Networks (MASPINN). The accelerator architecture exploits two novel concepts for an efficient computation of spiking neural networks: weight caching and a compressed memory organization. These concepts allow a further parallelization in processing and reduce bandwidth requirements on accelerator’s components....
Journal: CoRR
Volume: abs/1609.02053
Issue: -
Pages: -
Publication date: 2016